51 research outputs found

    New Integrality Gap Results for the Firefighters Problem on Trees

    The firefighter problem is NP-hard and admits a (1-1/e)-approximation based on rounding the canonical LP. In this paper, we first show a matching integrality gap of (1-1/e+\epsilon) on the canonical LP. This result relies on a powerful combinatorial gadget that can be used to prove integrality gap results in many problem settings. We also consider the canonical LP augmented with simple additional constraints (as suggested by Hartke). We provide several pieces of evidence that these constraints improve the integrality gap of the canonical LP: (i) extreme points of the new LP are integral for some known tractable instances, and (ii) a natural family of instances that is bad for the canonical LP admits an improved approximation algorithm via the new LP. We conclude by presenting a 5/6 integrality gap instance for the new LP. Comment: 22 pages
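
    For context, here is a minimal sketch of a canonical-style LP relaxation for the firefighter problem on a tree, reconstructed from the firefighter-on-trees literature rather than copied from this paper; the notation (w_v for vertex weights, P_v for the root-to-v path excluding the root, d(u) for the depth of u) is ours, and Hartke's additional constraints are omitted.

```latex
% Sketch of a canonical-style Firefighter LP on a tree rooted at r
% (our notation, reconstructed from the literature; details in the paper may differ).
% x_u = fractional amount by which vertex u is defended (at time d(u), its depth),
% P_v = vertices on the root-to-v path excluding r, w_v = weight of v.
\begin{align*}
  \max\ & \sum_{v \neq r} w_v \sum_{u \in P_v} x_u \\
  \text{s.t.}\ & \sum_{u \in P_v} x_u \le 1 && \text{for every vertex } v \neq r,\\
  & \sum_{u\,:\,d(u) \le t} x_u \le t && \text{for every time step } t \ge 1,\\
  & x_u \ge 0 && \text{for every vertex } u \neq r.
\end{align*}
```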

    Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs

    The study of graph products is a major research topic and typically concerns the term f(G*H), e.g., to show that f(G*H)=f(G)f(H). In this paper, we study graph products in a non-standard form f(R[G*H]) where R is a "reduction", a transformation of any graph into an instance of an intended optimization problem. We resolve some open problems as applications. (1) A tight n^{1-\epsilon}-approximation hardness for the minimum consistent deterministic finite automaton (DFA) problem, where n is the sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming NP \neq RP (the weakest possible assumption). (2) A tight n^{1/2-\epsilon} hardness for the edge-disjoint paths (EDP) problem on directed acyclic graphs (DAGs), where n denotes the number of vertices. (3) A tight hardness of packing vertex-disjoint k-cycles for large k. (4) An alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004 and J. Comput. Syst. Sci. 2008].
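
    As a toy illustration of the standard multiplicative behaviour f(G*H)=f(G)f(H) that the abstract contrasts with its pre-reduction products f(R[G*H]), the sketch below brute-forces the identity alpha(G[H]) = alpha(G) * alpha(H) for the lexicographic product of two tiny graphs; the choice of product, the example graphs, and the helper names are ours and are not taken from the paper.

```python
# Toy illustration (not the paper's construction): for the lexicographic
# product G[H], the independence number is multiplicative.
from itertools import combinations

def lex_product(edges_g, n_g, edges_h, n_h):
    """Lexicographic product: (u,x)~(v,y) iff u~v, or u==v and x~y."""
    eg, eh = set(map(frozenset, edges_g)), set(map(frozenset, edges_h))
    verts = [(u, x) for u in range(n_g) for x in range(n_h)]
    edges = []
    for (u, x), (v, y) in combinations(verts, 2):
        if frozenset((u, v)) in eg or (u == v and frozenset((x, y)) in eh):
            edges.append(((u, x), (v, y)))
    return verts, edges

def alpha(verts, edges):
    """Brute-force independence number (fine for toy-sized graphs)."""
    adj = {v: set() for v in verts}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    for k in range(len(verts), 0, -1):
        for s in combinations(verts, k):
            if all(b not in adj[a] for a, b in combinations(s, 2)):
                return k
    return 0

# Example: G = path on 3 vertices (alpha = 2), H = triangle (alpha = 1).
pg, ph = [(0, 1), (1, 2)], [(0, 1), (1, 2), (0, 2)]
verts, edges = lex_product(pg, 3, ph, 3)
print(alpha([0, 1, 2], pg), alpha([0, 1, 2], ph), alpha(verts, edges))  # 2 1 2
```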

    Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

    We present a series of almost settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the maximum independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the maximum induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are: - For any r larger than some constant, any r-approximation algorithm for the maximum independent set problem must run in at least 2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of 2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et al., 2013). - For any k larger than some constant, there is no polynomial-time min(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph pricing problem, where n is the number of vertices in an input graph. This almost matches the upper bound of min(O(k), \tilde O(\sqrt{n})) (by Balcan and Blum, 2007 and an algorithm in this paper). We note the interesting fact that, in contrast to the n^{1/2-\epsilon} hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an n^{\delta}-approximation for any \delta > 0 in quasi-polynomial time. This puts the problem in a rare approximability class in which approximability thresholds can be improved significantly by allowing algorithms to run in quasi-polynomial time. Comment: The full version of FOCS 201
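
    The 2^{n/r} upper bound cited above comes from a folklore block-partitioning scheme; the sketch below is our reconstruction of it (not code from Cygan et al. or from this paper): split the n vertices into r blocks of size about n/r, solve maximum independent set exactly inside each block, and return the largest of the r solutions. Some optimal solution places at least OPT/r of its vertices in one block, so this is an r-approximation running in roughly r * 2^{n/r} time.

```python
from itertools import combinations

def exact_mis(vertices, adj):
    """Brute-force maximum independent set restricted to `vertices`."""
    for k in range(len(vertices), 0, -1):
        for cand in combinations(vertices, k):
            if all(v not in adj[u] for u, v in combinations(cand, 2)):
                return list(cand)
    return []

def block_approx_mis(n, edges, r):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    block_size = -(-n // r)  # ceil(n / r)
    blocks = [list(range(i, min(i + block_size, n)))
              for i in range(0, n, block_size)]
    # Best exact solution over the blocks: an r-approximation overall.
    return max((exact_mis(b, adj) for b in blocks), key=len)

# Tiny example: a 6-cycle with r = 2 blocks; the answer has size >= OPT/r = 3/2.
print(block_approx_mis(6, [(i, (i + 1) % 6) for i in range(6)], r=2))
```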

    Graph Pricing Problem on Bounded Treewidth, Bounded Genus and k-partite graphs

    Consider the following problem. A seller has infinite copies of n products, represented by nodes in a graph. There are m consumers; each has a budget and wants to buy two products. Consumers are represented by weighted edges. Given the prices of products, each consumer buys both products she wants, at the given prices, if she can afford to. Our objective is to help the seller price the products to maximize her profit. This problem is called the {\em graph vertex pricing} ({\sf GVP}) problem and has resisted several recent attempts to improve on its current simple solution. This motivates the study of the problem on special classes of graphs. In this paper, we study this problem on a large class of graphs, such as graphs with bounded treewidth, bounded genus and k-partite graphs. We show that there exists an {\sf FPTAS} for {\sf GVP} on graphs with bounded treewidth. This result is also extended to an {\sf FPTAS} for the more general {\em single-minded pricing} problem. On bounded-genus graphs we present a {\sf PTAS} and show that {\sf GVP} is {\sf NP}-hard even on planar graphs. We study the Sherali-Adams hierarchy applied to a natural Integer Program formulation that (1+\epsilon)-approximates the optimal solution of {\sf GVP}. The Sherali-Adams hierarchy has gained much interest recently as a possible approach to developing new approximation algorithms. We show that, when the input graph has bounded treewidth or bounded genus, applying a constant number of rounds of the Sherali-Adams hierarchy makes the integrality gap of this natural {\sf LP} arbitrarily small, thus giving a (1+\epsilon)-approximate solution to the original {\sf GVP} instance. On k-partite graphs, we present a constant-factor approximation algorithm. We further improve the approximation factors for paths, cycles and graphs with degree at most three. Comment: Preprint of the paper to appear in Chicago Journal of Theoretical Computer Science
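
    To make the pricing objective concrete, the sketch below evaluates the seller's profit for a given price vector on a toy GVP instance and searches naively over a small candidate price set; this is only an illustration of the objective (with our own helper names), not the FPTAS, PTAS or any other algorithm from the paper.

```python
# GVP objective: vertices are products, each weighted edge (u, v, b) is a
# consumer with budget b who buys both u and v (paying p[u] + p[v]) iff
# p[u] + p[v] <= b.  The naive search below is exponential and only
# illustrative.
from itertools import product

def profit(prices, consumers):
    return sum(prices[u] + prices[v]
               for u, v, budget in consumers
               if prices[u] + prices[v] <= budget)

def brute_force_gvp(n, consumers, candidate_prices):
    best = max(product(candidate_prices, repeat=n),
               key=lambda p: profit(p, consumers))
    return list(best), profit(best, consumers)

# Toy instance: a path on 3 products; consumers (0,1) and (1,2) with budgets 4 and 6.
# No pricing can beat 4 + 6 = 10, since each consumer pays at most her budget.
consumers = [(0, 1, 4), (1, 2, 6)]
print(brute_force_gvp(3, consumers, candidate_prices=[0, 1, 2, 3, 4, 5, 6]))
```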

    Sorting Pattern-Avoiding Permutations via 0-1 Matrices Forbidding Product Patterns

    We consider the problem of comparison-sorting an n-permutation S that avoids some k-permutation \pi. Chalermsook, Goswami, Kozma, Mehlhorn, and Saranurak prove that when S is sorted by inserting the elements into the GreedyFuture binary search tree, the running time is linear in the extremal function \mathrm{Ex}(P_\pi \otimes \text{hat}, n). This is the maximum number of 1s in an n \times n 0-1 matrix avoiding P_\pi \otimes \text{hat}, where P_\pi is the k \times k permutation matrix of \pi, \otimes the Kronecker product, and \text{hat} = \left(\begin{array}{ccc}&\bullet&\\\bullet&&\bullet\end{array}\right). The same time bound can be achieved by sorting S with Kozma and Saranurak's SmoothHeap. In this paper we give nearly tight upper and lower bounds on the density of P_\pi \otimes \text{hat}-free matrices in terms of the inverse-Ackermann function \alpha(n): \mathrm{Ex}(P_\pi\otimes \text{hat},n) = \left\{\begin{array}{ll} \Omega(n\cdot 2^{\alpha(n)}), & \mbox{for most $\pi$,}\\ O(n\cdot 2^{O(k^2)+(1+o(1))\alpha(n)}), & \mbox{for all $\pi$.} \end{array}\right. As a consequence, sorting \pi-free sequences can be performed in O(n 2^{(1+o(1))\alpha(n)}) time. For many corollaries of the dynamic optimality conjecture, the best analysis uses forbidden 0-1 matrix theory. Our analysis may be useful in analyzing other classes of access sequences on binary search trees.
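
    To make the forbidden 0-1 matrix notion concrete: a matrix A contains a pattern P if some rows and columns of A, kept in order, have 1s wherever P does, and \mathrm{Ex}(P, n) is the maximum number of 1s in an n \times n matrix avoiding P. The brute-force containment check below (our own illustrative code, usable only on tiny examples) tests a small matrix against the hat pattern and a 2 \times 2 permutation matrix.

```python
from itertools import combinations

def contains(A, P):
    """True if A contains pattern P: some rows/columns of A, kept in order,
    have 1s wherever P does.  Brute force over row and column choices."""
    n, m = len(A), len(A[0])
    k, l = len(P), len(P[0])
    for rows in combinations(range(n), k):
        for cols in combinations(range(m), l):
            if all(A[rows[i]][cols[j]] >= P[i][j]
                   for i in range(k) for j in range(l)):
                return True
    return False

# The "hat" pattern from the abstract (bullet = 1, blank = 0).
HAT = [[0, 1, 0],
       [1, 0, 1]]

# The permutation matrix of the identity 2-permutation.
P_12 = [[1, 0],
        [0, 1]]

A = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
print(contains(A, HAT), contains(A, P_12))
```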

    How to Tame Rectangles: Solving Independent Set and Coloring of Rectangles via Shrinking


    On Guillotine Cutting Sequences

    Imagine a wooden plate with a set of non-overlapping geometric objects painted on it. How many of them can a carpenter cut out using a panel saw making guillotine cuts, i.e., only moving forward through the material along a straight line until it is split into two pieces? Already fifteen years ago, Pach and Tardos investigated whether one can always cut out a constant fraction if all objects are axis-parallel rectangles. However, even for the case of axis-parallel squares this question is still open. In this paper, we answer the latter question affirmatively. Our result is constructive and holds even in a more general setting where the squares have weights and the goal is to save as much weight as possible. We further show that an affirmative answer to the more general question for rectangles, using only axis-parallel cuts, would yield a combinatorial O(1)-approximation algorithm for the Maximum Independent Set of Rectangles problem and would thus solve a long-standing open problem. In practical applications, like the carpentry setting above and many others, we can usually place the items that we want to cut out freely, which gives rise to the two-dimensional guillotine knapsack problem: given a collection of axis-parallel rectangles without presumed coordinates, our goal is to place as many of them as possible in a square-shaped knapsack, respecting the constraint that the placed objects can be separated by a sequence of guillotine cuts. Our main result for this problem is a quasi-PTAS, assuming the input data to be quasi-polynomially bounded integers. This factor matches the best known (quasi-polynomial time) result for (non-guillotine) two-dimensional knapsack.
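
    The sketch below spells out guillotine separability for a set of placed, non-overlapping axis-parallel rectangles: recursively look for a full vertical or horizontal cut that does not slice through any rectangle, split, and recurse on the two sides. It is an illustrative checker written by us, not an algorithm from the paper; the second example is the classic four-rectangle pinwheel, which no guillotine cut can separate.

```python
# Each rectangle is (x1, y1, x2, y2).  A cut is allowed if it slices the
# current piece edge to edge without passing through any rectangle's interior;
# candidate cut coordinates can be restricted to rectangle borders.
def guillotine_separable(rects):
    if len(rects) <= 1:
        return True
    for lo, hi in ((0, 2), (1, 3)):        # vertical cuts, then horizontal cuts
        for cut in {r[lo] for r in rects} | {r[hi] for r in rects}:
            if any(r[lo] < cut < r[hi] for r in rects):
                continue                    # the line would cut through a rectangle
            left = [r for r in rects if r[hi] <= cut]
            right = [r for r in rects if r[lo] >= cut]
            if left and right and len(left) + len(right) == len(rects):
                if guillotine_separable(left) and guillotine_separable(right):
                    return True
    return False

# Two stacked squares are separable; a pinwheel of four rectangles is not.
print(guillotine_separable([(0, 0, 1, 1), (0, 2, 1, 3)]))            # True
print(guillotine_separable([(0, 0, 3, 1), (3, 0, 4, 3),
                            (1, 3, 4, 4), (0, 1, 1, 4)]))            # False
```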

    From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More

    We consider questions that arise from the intersection between the areas of polynomial-time approximation algorithms, subexponential-time algorithms, and fixed-parameter tractable algorithms. The questions, which have been asked several times (e.g., [Marx08, FGMS12, DF13]), are whether there is a non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and Minimum Dominating Set (DomSet) problems parameterized by the size of the optimal solution. In particular, letting \text{OPT} be the optimum and N be the size of the input, is there an algorithm that runs in t(\text{OPT}) \cdot \text{poly}(N) time and outputs a solution of size f(\text{OPT}), for any functions t and f that are independent of N (for Clique, we want f(\text{OPT}) = \omega(1))? In this paper, we show that both Clique and DomSet admit no non-trivial FPT-approximation algorithm, i.e., there is no o(\text{OPT})-FPT-approximation algorithm for Clique and no f(\text{OPT})-FPT-approximation algorithm for DomSet, for any function f (e.g., this holds even if f is the Ackermann function). In fact, our results imply something even stronger: the best way to solve Clique and DomSet, even approximately, is to essentially enumerate all possibilities. Our results hold under the Gap Exponential Time Hypothesis (Gap-ETH) [Dinur16, MR16], which states that no 2^{o(n)}-time algorithm can distinguish between a satisfiable 3SAT formula and one which is not even (1-\epsilon)-satisfiable for some constant \epsilon > 0. Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for Maximum Balanced Biclique, Maximum Subgraphs with Hereditary Properties, and Maximum Induced Matching in bipartite graphs. Additionally, we rule out k^{o(1)}-FPT-approximation algorithms for Densest k-Subgraph, although this ratio does not yet match the trivial O(k)-approximation algorithm. Comment: 43 pages. To appear in FOCS'1
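
    For contrast with the lower bounds above, the trivial baseline that "essentially enumerates all possibilities" for Clique is the n^{O(k)}-time brute force over all size-k vertex subsets; the sketch below (our own illustrative code, not from the paper) spells it out.

```python
# Trivial exact algorithm for k-Clique: try every size-k vertex subset and
# check whether it is fully connected, about n^k * poly(k) time.
from itertools import combinations

def has_k_clique(n, edges, k):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return any(all(b in adj[a] for a, b in combinations(cand, 2))
               for cand in combinations(range(n), k))

# Example: a 4-cycle has a 2-clique (an edge) but no triangle.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_k_clique(4, c4, 2), has_k_clique(4, c4, 3))  # True False
```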